60 research outputs found
The Gaussian Interference Channel in the Presence of Malicious Jammers
This paper considers the two-user Gaussian interference channel in the
presence of adversarial jammers. We first provide a general model including an
arbitrary number of jammers, and show that its capacity region is equivalent to
that of a simplified model in which the received jamming signal at each decoder
is independent. Next, existing outer and inner bounds for the two-user Gaussian
interference channel are generalized to this simplified jamming model. We show
that for certain problem parameters, precisely the same bounds hold, but with
the noise variance increased by the received power of the jammer at each
receiver. Thus, the jammers can do no better than to transmit Gaussian noise.
For these problem parameters, this allows us to recover the half-bit theorem.
In the weak and strong interference regimes, our inner bound matches the
corresponding Han-Kobayashi bound with the noise variance increased by the
received power of the jammer, and in strong interference we achieve the exact
capacity. Furthermore, we determine the symmetric degrees of freedom in the
regime where the signal-to-noise, interference-to-noise, and jammer-to-noise
ratios all tend to infinity. Moreover, we show that if the jammer's received
power exceeds that of the legitimate user, symmetrizability drives the
capacity to zero. The proof
of the outer bound is straightforward, while the inner bound generalizes the
Han-Kobayashi rate splitting scheme. As a novel aspect, the inner bound takes
advantage of the common message acting as common randomness for the private
message; hence, the jammer cannot symmetrize only the private codeword without
being detected. This complication requires an extra condition on the signal
power, so that in general our inner bound is not identical to the Han-Kobayashi
bound. We also prove a new variation of the packing lemma that applies to
multiple Gaussian codebooks in an adversarial setting.
Comment: It has been submitted to the IEEE Transactions on Information Theory.
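The abstract's central message, that the jammers can do no better than transmitting Gaussian noise, has a simple quantitative reading: the jammer's received power simply adds to the noise variance. A minimal numerical sketch (the power values P, N, J are arbitrary, and the point-to-point AWGN formula stands in for the full interference-channel bounds):

```python
import math

def awgn_capacity(snr):
    """Capacity of a point-to-point AWGN channel in bits per channel use."""
    return 0.5 * math.log2(1.0 + snr)

# Signal power P, receiver noise variance N, received jammer power J
P, N, J = 10.0, 1.0, 4.0

# Worst-case jamming acts as extra Gaussian noise, so the effective
# noise variance at the receiver becomes N + J.
c_clean = awgn_capacity(P / N)
c_jammed = awgn_capacity(P / (N + J))

print(f"capacity without jammer: {c_clean:.3f} bits/use")
print(f"capacity with jammer:    {c_jammed:.3f} bits/use")
```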
Strong Converses Are Just Edge Removal Properties
This paper explores the relationship between two ideas in network information
theory: edge removal and strong converses. Edge removal properties state that
if an edge of small capacity is removed from a network, the capacity region
does not change too much. Strong converses state that, for rates outside the
capacity region, the probability of error converges to 1 as the blocklength
goes to infinity. Various notions of edge removal and strong converse are
defined, depending on how edge capacity and error probability scale with
blocklength, and relations between them are proved. Each class of strong
converse implies a specific class of edge removal. The opposite directions are
proved for deterministic networks. Furthermore, a technique based on a novel,
causal version of the blowing-up lemma is used to prove that for discrete
memoryless networks, the weak edge removal property--that the capacity region
changes continuously as the capacity of an edge vanishes--is equivalent to the
exponentially strong converse--that outside the capacity region, the
probability of error goes to 1 exponentially fast. This result is used to prove
exponentially strong converses for several examples, including the discrete
2-user interference channel with strong interference, with only a small
variation from traditional weak converse proofs.
Comment: (v4) Addition of Table I clarifying notation, corrected proof of
Proposition 3, and other minor improvements.
Equivalence for Networks with Adversarial State
We address the problem of finding the capacity of noisy networks with either
independent point-to-point compound channels (CC) or arbitrarily varying
channels (AVC). These channels model the presence of a Byzantine adversary
which controls a subset of links or nodes in the network. We derive equivalence
results showing that these point-to-point channels with state can be replaced
by noiseless bit-pipes without changing the network capacity region. Exact
equivalence results are found for the CC model, and for some instances of the
AVC, including all nonsymmetrizable AVCs. These results show that a feedback
path between the output and input of a CC can increase the equivalent capacity,
and that if common randomness can be established between the terminals of an
AVC (either by feedback, a forward path, or via a third-party node), then again
the equivalent capacity can increase. This leads to the observation that
deleting an edge of arbitrarily small capacity can cause a significant change
in network capacity. We also analyze an example involving an AVC for which no
fixed-capacity bit-pipe is equivalent.
Comment: 40 pages, 6 figures. To appear in the IEEE Transactions on
Information Theory.
Variable-Rate Distributed Source Coding in the Presence of Byzantine Sensors
The distributed source coding problem is considered when the sensors, or
encoders, are under Byzantine attack; that is, an unknown number of sensors
have been reprogrammed by a malicious intruder to undermine the reconstruction
at the fusion center. Three different forms of the problem are considered. The
first is a variable-rate setup, in which the decoder adaptively chooses the
rates at which the sensors transmit. An explicit characterization of the
variable-rate minimum achievable sum rate is stated, given by the maximum
entropy over the set of distributions indistinguishable from the true source
distribution by the decoder. In addition, two forms of the fixed-rate problem
are considered, one with deterministic coding and one with randomized coding.
The achievable rate regions are given for both these problems, with a larger
region achievable using randomized coding, though both are suboptimal compared
to variable-rate coding.
Comment: 5 pages, submitted to ISIT 200
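The variable-rate result above characterizes the minimum sum rate as the maximum entropy over the set of distributions the decoder cannot distinguish from the true source. A toy sketch of that computation, where the candidate set is hypothetical (in the paper it is induced by the Byzantine sensors' possible behavior):

```python
import math

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Hypothetical set of distributions indistinguishable from the true source
# by the decoder; the true source is the fair coin.
indistinguishable = [
    (0.5, 0.5),  # true source
    (0.7, 0.3),  # distribution forged by compromised sensors
    (0.9, 0.1),
]

# The variable-rate minimum achievable sum rate: the worst (largest)
# entropy over the indistinguishable set.
min_sum_rate = max(entropy(p) for p in indistinguishable)
print(f"minimum achievable sum rate: {min_sum_rate:.3f} bits/symbol")
```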
Capacity of Cooperative Fusion in the Presence of Byzantine Sensors
The problem of cooperative fusion in the presence of Byzantine sensors is
considered. An information theoretic formulation is used to characterize the
Shannon capacity of sensor fusion. It is shown that when less than half of the
sensors are Byzantine, the effect of Byzantine attack can be entirely
mitigated, and the fusion capacity is identical to that when all sensors are
honest. But when at least half of the sensors are Byzantine, they can
completely defeat the sensor fusion so that no information can be transmitted
reliably. A capacity-achieving transmit-then-verify strategy is proposed for
the case that less than half of the sensors are Byzantine, and its error
probability and coding rate are analyzed using a Markov decision process
model of the transmission protocol.
Comment: 8 pages, 2 figures.
Asymptotics and Non-asymptotics for Universal Fixed-to-Variable Source Coding
Universal fixed-to-variable lossless source coding for memoryless sources is
studied in the finite blocklength and higher-order asymptotics regimes. Optimal
third-order coding rates are derived for general fixed-to-variable codes and
for prefix codes. It is shown that the non-prefix Type Size code, in which
codeword lengths are chosen in ascending order of type class size, achieves the
optimal third-order rate and outperforms classical Two-Stage codes. Converse
results are proved making use of a result on the distribution of the empirical
entropy and Laplace's approximation. Finally, the fixed-to-variable coding
problem without a prefix constraint is shown to be essentially the same as the
universal guessing problem.
Comment: 32 pages, 1 figure. Submitted to the IEEE Transactions on
Information Theory, Dec. 201
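The Type Size code described above orders sequences by the size of their type class and gives the smallest classes the shortest codewords. A small self-contained sketch for binary sequences (blocklength n = 8 is arbitrary; the codeword-length rule, the i-th binary string of length floor(log2(i+1)), is one standard way to realize a one-to-one, non-prefix code):

```python
from itertools import product
from math import comb

n = 8  # blocklength

def type_class_size(seq):
    """Number of sequences sharing seq's type: C(n, #ones) for binary."""
    return comb(n, sum(seq))

# Type Size code: sort all 2^n sequences by ascending type class size and
# assign the i-th sequence the i-th binary string, of length
# floor(log2(i + 1)) (the empty string counts as index 0).
sequences = sorted(product((0, 1), repeat=n), key=type_class_size)
lengths = {seq: (i + 1).bit_length() - 1 for i, seq in enumerate(sequences)}

# Constant sequences live in singleton type classes and get the shortest
# codewords; balanced sequences sit in the largest class and get the longest.
print(lengths[(0,) * n], lengths[(0, 1) * (n // 2)])
```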
Fundamental Limits of Universal Variable-to-Fixed Length Coding of Parametric Sources
Universal variable-to-fixed (V-F) length coding of a finite-dimensional
exponential family of distributions is considered. We propose an achievable
scheme consisting of a dictionary, used to parse the source output stream,
making use of the previously-introduced notion of quantized types. The
quantized type class of a sequence is based on partitioning the space of
minimal sufficient statistics into cuboids. Our proposed dictionary consists of
sequences lying at the boundary of the transition from low to high quantized
type class size. We derive the asymptotics of the coding rate of our scheme
for large enough dictionaries. In particular, we characterize the third-order
coding rate of the scheme in terms of the entropy of the source and the
dictionary size, and we provide a converse showing that this rate is optimal
up to the third-order term.
Information-Theoretic Privacy with General Distortion Constraints
The privacy-utility tradeoff problem is formulated as determining the privacy
mechanism (random mapping) that minimizes the mutual information (a metric for
privacy leakage) between the private features of the original dataset and a
released version. The minimization is studied with two types of constraints on
the distortion between the public features and the released version of the
dataset: (i) subject to a constraint on the expected value of a cost function
applied to the distortion, and (ii) subject to bounding the complementary
CDF of the distortion by a given non-increasing function. The first scenario
captures various practical cost functions for distorted released data, while
the second scenario covers large deviation constraints on utility. The
asymptotic optimal leakage is derived in both scenarios. For the distortion
cost constraint, it is shown that for convex cost functions there is no
asymptotic loss in using stationary memoryless mechanisms. For the
complementary CDF bound on distortion, the asymptotic leakage is derived for
general mechanisms and shown to be the integral of the single-letter leakage
function with respect to the Lebesgue-Stieltjes measure defined by the
refined bound on distortion. However, it is shown that memoryless mechanisms
are generally suboptimal in both cases.
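The privacy-utility tradeoff above can be made concrete in the simplest possible instance: a uniform binary private feature passed through a randomized-response (bit-flip) mechanism, with flip probability as the distortion. This toy is an illustration only, not the paper's general setting:

```python
import math

def h(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def leakage(p):
    """I(S;Y) for a uniform binary feature through a flip-with-prob-p mechanism."""
    return 1.0 - h(p)

# Expected-distortion constraint E[d(S,Y)] = p <= D: leakage decreases in p
# on [0, 1/2], so flip as often as allowed (leakage hits 0 at p = 1/2).
D = 0.2
best_p = min(D, 0.5)
print(f"optimal flip prob {best_p}, leakage {leakage(best_p):.3f} bits")
```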
A Second-Order Converse Bound for the Multiple-Access Channel via Wringing Dependence
A new converse bound is presented for the two-user multiple-access channel
under the average probability of error constraint. This bound shows that for
most channels of interest, the second-order coding rate---that is, the
difference between the best achievable rates and the asymptotic capacity region
as a function of blocklength with fixed probability of error---is O(1/√n)
bits per channel use. The principal tool behind this converse
proof is a new measure of dependence between two random variables called
wringing dependence, as it is inspired by Ahlswede's wringing technique. The
O(1/√n) gap is shown to hold for any channel satisfying certain
regularity conditions, which includes all discrete-memoryless channels and the
Gaussian multiple-access channel. Exact upper bounds as a function of the
probability of error are proved for the coefficient of the 1/√n term,
although for most channels they do not match existing achievable bounds.
Comment: 38 pages, 3 figures.
Vulnerability Analysis and Consequences of False Data Injection Attack on Power System State Estimation
An unobservable false data injection (FDI) attack on AC state estimation (SE)
is introduced and its consequences on the physical system are studied. With a
focus on understanding the physical consequences of FDI attacks, a bi-level
optimization problem is introduced whose objective is to maximize the physical
line flows subsequent to an FDI attack on DC SE. The maximization is subject to
constraints on both attacker resources (size of attack) and attack detection
(limiting load shifts) as well as those required by DC optimal power flow (OPF)
following SE. The resulting attacks are tested on a more realistic non-linear
system model using AC state estimation and ACOPF, and it is shown that, with an
appropriately chosen sub-network, the attacker can overload transmission lines
with moderate shifts of load.
Comment: 9 pages, 7 figures. A version of this manuscript was submitted to
the IEEE Transactions on Power Systems.
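The unobservability of the FDI attack on DC state estimation rests on a classical fact: if the attack vector lies in the column space of the measurement Jacobian H (i.e., a = Hc), the least-squares residual is unchanged while the state estimate shifts by c. A minimal sketch with an illustrative 3-measurement, 2-state system (all numbers are made up; the paper's bi-level optimization over attacker resources is not modeled here):

```python
# Toy DC state estimation: z = H x + e, least-squares estimate, and an
# unobservable FDI attack a = H c that shifts the estimate without
# changing the measurement residual.

def matvec(H, x):
    return [sum(hij * xj for hij, xj in zip(row, x)) for row in H]

def lsq2(H, z):
    """Least-squares solution for a tall matrix with 2 columns,
    via the normal equations (H^T H) x = H^T z and a 2x2 inverse."""
    a = sum(r[0] * r[0] for r in H)
    b = sum(r[0] * r[1] for r in H)
    d = sum(r[1] * r[1] for r in H)
    u = sum(r[0] * zi for r, zi in zip(H, z))
    v = sum(r[1] * zi for r, zi in zip(H, z))
    det = a * d - b * b
    return [(d * u - b * v) / det, (a * v - b * u) / det]

def residual(H, z):
    x_hat = lsq2(H, z)
    return [zi - mi for zi, mi in zip(z, matvec(H, x_hat))]

H = [[1.0, 0.0], [0.0, 1.0], [1.0, -1.0]]  # measurement Jacobian
z = [1.05, 0.52, 0.50]                      # noisy measurements
c = [0.3, -0.1]                             # attacker's chosen state shift
z_attacked = [zi + ai for zi, ai in zip(z, matvec(H, c))]

# Residuals agree, so a residual-based bad-data detector cannot see the attack.
print(residual(H, z))
print(residual(H, z_attacked))
```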